Search Results: "cfm"

9 June 2010

David Welton: A Minor Erlang Rant

In an earlier post, I compared node.js to Erlang: http://journal.dedasys.com/2010/04/29/erlang-vs-node-js - which admittedly has a whiff of apples and oranges about it. Still, though, I think there's something to it. Node.js is creating lots of buzz for itself these days. Some of that will turn into actual production use, and at that point actual companies will have a vested interest in seeing the system improved. Currently, it is not as 'good' as Erlang. Erlang's VM has a built-in scheduler, so you simply don't have to worry about manually creating bite-sized chunks of processing that feed into one another; it all "just works". For instance, my current Erlang project involves reading from Twitter's stream and distributing it to IRC and the web. It's pretty simple, works well, and is quite robust. I haven't had the best of days, though, and one little annoyance of the many I dealt with today really caught my eye. I need to do an HTTP POST in Erlang, and:
  1. The documentation does not have examples.
  2. Here's an example of how to do a POST:
    http:request(post, {"http://scream-it.com/win/tests/posttest.cfm", [], "application/x-www-form-urlencoded", "x=2&y=3&msg=Hello%20World"}, [], []).
  3. Aside from being ugly and not very legible, you'll note that he's passing the parameters as a pre-built string, and also has to manually supply the "application/x-www-form-urlencoded" content type.
  4. To create that string, you'd want to url encode it. Actually, ideally, you'd just pass in a list of key/value pairs and let the library module handle it.
  5. However, there's nothing in the http module that does that (a sketch of what such a helper could look like follows this list).
  6. If you look through various bits of Erlang code on the web, you'll note many, many versions of url encoding and decoding, because the http module makes no mention of how one might go about doing so.
  7. It turns out that the edoc_lib module does have a URI-encoding function!
  8. That isn't documented in its own man page.
  9. And certainly isn't linked to in the http page.
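For illustration, here is a minimal sketch of the kind of helper the http module is missing: a hypothetical form_encode module (the module name and functions are made up for this post, not part of OTP) that percent-encodes a list of string key/value pairs into an application/x-www-form-urlencoded body. It assumes inets is already started and that keys and values are plain strings.

    %% form_encode.erl -- hypothetical helper, not part of OTP.
    -module(form_encode).
    -export([encode/1]).

    %% Turn [{"x", "2"}, {"msg", "Hello World"}] into "x=2&msg=Hello%20World".
    encode(Pairs) ->
        string:join([escape(K) ++ "=" ++ escape(V) || {K, V} <- Pairs], "&").

    %% Percent-encode everything outside the unreserved character set.
    escape(Str) ->
        lists:flatten([escape_char(C) || C <- Str]).

    escape_char(C) when C >= $a, C =< $z; C >= $A, C =< $Z; C >= $0, C =< $9;
                        C =:= $.; C =:= $-; C =:= $_; C =:= $~ ->
        C;
    escape_char(C) ->
        io_lib:format("%~2.16.0B", [C]).

With something like that in the standard library, the POST above would shrink to:

    Body = form_encode:encode([{"x", "2"}, {"y", "3"}, {"msg", "Hello World"}]),
    http:request(post,
                 {"http://scream-it.com/win/tests/posttest.cfm", [],
                  "application/x-www-form-urlencoded", Body},
                 [], []).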
So, in 2010, doing an HTTP POST in Erlang is still a pain in the neck. I'd be happy to put my money where my mouth is and submit a patch (at least one for the documentation), but you have to wonder why no one has fixed the problem so far - maybe they're not very accepting of patches?

9 April 2010

David Welton: US Exports: "The Cloud"?

An Economist special report in last week's print edition talks about how the US will need to focus more on savings and exports:

A special report on America's economy: "Time to rebalance". I've been thinking about that for a while too, especially after the dollar's recent weakness, although it has been strengthening some lately, apparently due to the state of Greece's finances... I think that the computing industry is, in general, well poised to take advantage of that. For instance, what could be easier to export than computing power or "Software as a Service"? All it takes is a few minutes for someone in Europe to sign up to a US-based service with a credit card. For instance, compare Linode's prices and good service with most of their European competitors (gandi.net for instance, who are good people, and you have to love that they put "no bullshit" right on their front page). Not that they don't have good service in Europe, but it's very difficult to compete on price with the dollar being significantly cheaper. With the dollar where it is right now, gandi is almost, but not quite, competitive with Linode. If you don't include taxes. If the dollar weakens again, though, things could easily tilt far in Linode's favor. Besides a weak dollar, I think it will be important for companies in a position to do so in the US to focus on "the rest of the world". The US is a big, populous country where it's very easy to forget about far-off lands. Compare my home town of Eugene, Oregon to where I live in Padova. Google Maps says that it takes 7+ hours to drive to Vancouver, Canada (which, to tell the truth, isn't all that foreign in that they speak English with an accent much closer to mine than, say, Alabama or Maine). Going south, Google says it's 15+ hours just to San Diego, although I think that's optimistic myself, given traffic in California. From Padova, I can be in France in 5 hours, according to Google, 3 hours to Switzerland, 4 to Innsbruck, in Austria, less than 3 hours to the capital of Slovenia, Ljubljana, and around 3 hours to Croatia, too. And if you wanted to throw in another country, the Republic of San Marino is also less than 3 hours away, according to Google's driving time estimates. You could probably live your entire life in a place like Eugene and never really deal much with foreigners, whereas here, nearby borders are both a historic and an ever-present fact. The outcome of this is that, to some degree, people in the US have traditionally focused their businesses "inwards" until they got to a certain size, which is, of course, a natural thing to do when you have such a big, homogeneous market to deal with before you even start thinking about foreign languages, different laws, exchange rates and all the hassle those things entail. However, if exchange rates hold steady or favor the US further, and internal spending remains weaker, it appears as if it may be sensible for companies to invest some time and energy to attract clients in "the rest of the world". "Cloud" companies (anyone got a better term? this one's awfully vague, but I want to encompass both "computing power" like Linode or Amazon's EC2, as well as "software as a service") will likely have a much easier time of things: for many services, it's easy to just keep running things in the US for a while, and worry about having physical or legal infrastructure abroad later. Your service might not be quite as snappy as it would be with a local server, but it'll do, if it performs a useful function.
Compare that with a more traditional business where you might have to do something like open a factory abroad, or at the very least figure out the details of how to ship physical products abroad and sell them, and do so in a way that you're somewhat insured against the large array of things that could go wrong between sending your products on their merry way, and someone buying them in Oslo, Lisbon or Prague. Since the barrier to entry is so much lower for a cloud business, it makes more sense to climb over it earlier on. As an example, Linode recently did a deal to provide VPS services from a London data center, to make their service more attractive to European customers. However, they still don't appear to have marketing materials translated into various languages, and presumably they don't have support staff capable of speaking languages like Chinese, German or Russian either (well, at least not in an official capacity). This isn't to pick on them; they may have considered those things and found them too much of an expense/distraction/hassle for the time being - they certainly know their business better than I do - and they may simply be content to make do with English. Other businesses, however, may decide that a local touch is important to attracting clients. What do you need to look at to make your service more attractive to people in other countries? In no particular order:

There's certainly no lack of work there, but on the other hand, it's possible to do almost all of it from wherever you happen to be located, rather than spending lots of money and time flying around to remote corners of the globe, as is still common practice in many industries.

23 March 2010

Miriam Ruiz: Ada Lovelace Day 2010

Today is March 24th, which means it is Ada Lovelace Day, being pushed as an international day of blogging to celebrate the achievements of women in technology and science. The aim of Ada Lovelace Day is to focus on building female role models, not just for girls and young women but also for those of us in tech who would like to feel that we are not alone in our endeavours. There are some very good examples of women who have been important in the development of science and technology, starting with Ada Lovelace herself (the first developer of an algorithm intended to be processed by a machine), Rear Admiral Grace Murray Hopper (developer of the first compiler for a computer programming language), Adele Goldstine (who wrote the complete technical description for the first digital computer, ENIAC), as well as the six women who did most of the programming of it (Kay McNulty, Betty Jennings, Betty Snyder, Marlyn Wescoff, Fran Bilas and Ruth Lichterman), or women scientists, or women inventors, etc. Well, I'm not going to write about any of those, even though any of them would surely deserve that and more. I'm going to write about a woman who has definitely been very inspiring and supportive for me when I was starting to get in touch with Free Software and Debian, and who is probably the most important single reason I decided to go for it. It is definitely hard to write about someone you admire when she happens to be one of your best friends, and in fact I'm pretty sure that most of the people reading this article already know her, so there's no great mystery. I'm talking about Amaya Rodrigo, the first European female Debian Developer (AFAIK) and co-founder of the Debian Women project, and also a member of the Hispalinux board in the golden days. The first time I met her she was giving a talk in Madrid about a project that was just starting then, Debian Women, and it was very inspiring for me. Inspiring enough for me to join the project. Afterwards I've learnt more about her, and how she overcame many difficulties, like starting to work with computers quite late, among others. The real merit of a pioneer is not really to be the best techie out there, but to overcome the difficulties and do the best you can, when no one else has done it before. I'm not going to write her biography here, it's not really the purpose of this blog entry, and you can probably ask her directly. This blog entry is, as I said at the beginning, to highlight women in technology that I consider inspiring and relevant. You know, I admire you, Amaya :)

2 December 2009

Margarita Manterola: Barbara Liskov, mother of Object Oriented Programming, among other things

This post is about Barbara Liskov, for Ada Lovelace Day. Barbara Liskov was the first woman in the United States of America to obtain a Ph.D. from a computer science department, in 1968. However, this isn't by any means her greatest achievement. She's the creator of the CLU programming language, a language created in the mid-70s that we would find crufty and ugly nowadays, but that, with its strong emphasis on abstraction, the use of clusters (basically equivalent to what we call classes today) and iterators, was to become the bedrock of Object Oriented Programming. Apart from that, she worked on the design of a timesharing operating system, called Venus; designed another programming language, called Argus, oriented towards distributed applications; and also set the foundations for much of what is currently done in distributed computing. Aged 70, she's currently still working at MIT, as the leader of the Programming Methodology Group, researching ways to tolerate Byzantine faults. For all this work, she received the John von Neumann Medal in 2004 and the Turing Award in 2008. All in all, what I find most inspiring about her life is the fact that she was able to pursue her career, working on a new way of creating programs, while she was also a wife and a mother; and that today, at 70, she's still researching, leading a group, and working towards making computing better.

28 November 2009

Andrew Pollock: [life] Thinking about moving

Our lease runs out at the end of January, and we're thinking about moving to a bigger place. In the four years we've been here (yeah, we just crossed the four year mark the other day) we've often thought about trying to buy as well. We go through these phases where we really feel like buying, then we run the numbers and run screaming back to the warm bosom of renting. A couple of months ago we made our most serious foray into buying. We'd just checked out the models for Mondrian and we really liked the floorplan for Bleu, and found the price to be the least breathtaking of anything we'd looked at in the Bay Area. I got as far as talking to mortgage brokers and running the numbers, and the things that killed it for us were the property taxes and homeowners association fees. The monthly repayments would have been doable, but it'd have really been a ball and chain. We're over here to see the country as much as anything else, and if the mortgage is going to be a significant impediment to our ability to travel, then there isn't really any point in doing it. So we sadly passed up on Mondrian. The three bedroom townhouses in our current complex are going for around the $2500 a month mark, which is a pretty serious jump on what we're paying now for our two bedroom one, so Sarah's been scouring Craigslist for anything better. She found a 2 bedroom plus loft condo being rented privately in Mountain View, which we took a look at on Wednesday. The immediate downsides are that it's older (the kitchen and bathroom are really a bit dated), has no data cabling (this is something I've really loved about our current place) and no microwave oven included. The upsides are that it's significantly bigger (about 500 square feet larger), the kitchen has heaps of cupboards, it has a washer and dryer, a lock-up garage, a small, fairly private yard (the rent includes a gardener), and it has what looks like a communal garden bed (the things that really caught my eye were the compost bins). So I think overall, as long as we can live with the kitchen and bathroom, it's going to be an improvement on where we are now. The windows are all double glazed, so it should be fairly well insulated. It's got a gas furnace and gas hot water, and I think the landlord pays for the water, so I think the utilities would at best come out the same as what we're paying here. We've decided to put an application in for it and see what happens. The landlord is living overseas, so we're dealing with a real estate agent for the letting. Apparently we'd be paying the rent via PayPal or something. He's got a home warranty arrangement for maintenance, which sounds like it'll be pretty good. Meanwhile, Sarah was scouring Redfin and found a house nearby that is for sale (a short sale), which is pretty reasonably priced. We've called up a real estate agent, and we're taking a look at it tomorrow, just because we can.

26 October 2009

David Welton: The Economist on "Management Gurus"

One of the reasons why I created Squeezed Books was to "deflate" some of the hot air present in many business books, so I particularly enjoyed The Economist's cynical take on "management gurus": The three habits of highly irritating management gurus
The first is presenting stale ideas as breathtaking breakthroughs ... The second irritating habit is that of naming model firms. Mr Covey littered his speech in London with references to companies he thinks are outstandingly well managed, including, bizarrely, General Motors' Saturn division, which is going out of business. ... The gurus routinely ignore such basic precautions as providing a control group. ... The third irritating habit is the flogging of management tools off the back of numbered lists or facile principles. ...
And their conclusion is spot on:
Which points to the most irritating thing of all about management gurus: that their failures only serve to stoke demand for their services. If management could indeed be reduced to a few simple principles, then we would have no need for management thinkers. But the very fact that it defies easy solutions, leaving managers in a perpetual state of angst, means that there will always be demand for books like Mr Covey's.

17 March 2009

Daniel Kahn Gillmor: Publicly-funded knowledge should be public

I live in the USA. Our government issues many grants to scientists for research via the National Institutes of Health. I recently found out about the NIH's recent requirement that publicly-funded research must be published freely online within 12 months. As you can imagine, i think this is a remarkably Good Thing (though 12 months seems a little bit long for fast-moving fields). Apparently, John Conyers and several co-sponsors have introduced HR 801, which appears intended to overturn this remarkable policy, primarily for the benefit of the companies that publish scientific journals. This bill is a shame, and i had hoped for better from Rep. Conyers, who otherwise has a remarkably positive record as a legislator advocating for government transparency and the public good. Sadly, his stance on so-called "Intellectual Property" seems characterized by heavy-handed legislation designed to benefit the parties already heavily favored by the current imbalanced copyright situation. If you live in the US (and especially if you live in Conyers' district in Michigan), please send him e-mail or get in touch by phone and tell him to drop the bill. You might also check the list of cosponsors to see if one of them is more local to you. If you want to read more, Lawrence Lessig has written about this issue, addressing Congressman Conyers directly in the Huffington Post. Curiously, Rep. Conyers' web site contains no mention of HR 801.

6 March 2009

Martin F. Krafft: Case Logic cases ruin your CDs

While I have most of my albums encoded as Ogg Vorbis files, the music from my college days (and before) is still only on disc, either in a big box in storage, or in one of a couple of Case Logic CD wallets I used back then to lug my tunes around the globe. I've long been meaning to encode those and shove the boxes and cases into storage, but in more than a year, that hasn't happened. A few weeks ago, my adorable girlfriend offered the necessary encouragement, suspecting that I might enjoy going through old music again. Right she was: it's great fun. I didn't think I was ever going to listen to Drum and Bass again, and now I am quite enjoying the music I listened to in high school. Unfortunately, while encoding all those discs, a pattern is emerging: the discs from the box are all processed without any problems; the discs from the wallets yield many read errors. Inspecting the physical media, the cause seems to be scratches in the plastic deep enough to damage the reflective layer. When CDs came out, they were touted as rigid and sturdy. The material quality has noticeably decreased over the years, as the producers kept cutting their costs. Discs of the past 15 years aren't good enough anymore to be stored the Case Logic way. I wish I had known 15 years ago. NP: LTJ Bukem: Logical Progression

22 February 2009

Theodore Ts'o: Should Filesystems Be Optimized for SSDs?

In one of the comments to my last blog entry, an anonymous commenter writes:
You seem to be taking a different perspective to Linus on the adapting-to-the-disk-technology front (Linus seems to be against having the OS know about disk boundaries and having to do levelling itself)
That's an interesting question, and I figure it's worth its own top-level entry, as opposed to a reply in the comment stream. One of the interesting design questions in any OS or computer architecture is where the abstraction boundaries should be drawn and to which side of an abstraction boundary various operations should be pushed. Linus's argument is that a flash controller can do a better job of wear leveling, including detecting how worn a particular flash cell might be (for example, perhaps by looking at the charge levels at an analog level and knowing when the cell was last programmed), and so it doesn't make sense to try to do wear leveling in a flash file system. Some responsibilities of flash management, such as coalescing newly written blocks into erase blocks to avoid write amplification, can be done either on the SSD or in the file system; for example, by using a log-structured file system, or some other copy-on-write file system, instead of a rewrite-in-place style file system, you can essentially solve the write amplification problem. In some cases, it's necessary to let additional information leak across the abstraction; for example, the ATA TRIM command is a way for the file system to let the disk know that certain blocks are no longer in use. If too much information needs to be pushed across the abstraction, one way or another, then maybe we need to rethink whether the abstraction barrier is in the right place. In addition, if the abstraction has been around for a long time, changing it also has costs, which have to be taken into account. The 512-byte sector LBA abstraction has been around a long time, and therefore dislodging it is difficult and costly. For example, the same argument which says that because the underlying hardware details change between different generations of SSD, all of these details should be hidden in hardware, was also used to justify something that has been a complete commercial failure for years if not decades: Object Based Disks. One of the arguments for OBDs was that the hard drive has the best knowledge of how and where to store a contiguous stream of bytes, and so perhaps filesystems should not be trying to decide where on disk an inode should be stored, but should instead tell the hard drive, "I have this object, which is 134 kilobytes long; please store it somewhere on the disk." At least in theory, the HDD or SSD could handle all of the details of knowing the best place to store the object on the spinning magnetic media or flash media: in the case of an SSD, taking into account how worn the flash is and automatically moving the object around, and in the case of the HDD, knowing about (real) cylinder and track boundaries and storing the object in the most efficient way possible, since the drive has intimate knowledge about the low-level details of how data is stored on the disk. This theory makes a huge amount of sense; but there's only one problem. Object Based Disks have been proposed in academia, and the advanced R&D shops of companies like Seagate have been pushing them for over a decade, with absolutely nothing to show for it. Why? There have been two reasons proposed. One is that OBD vendors were too greedy, and tried to charge too much money for OBDs. Another explanation is that the interface abstraction for OBDs was too different, and so there wasn't enough software or file systems or OSes that could take advantage of OBDs.
Both explanations undoubtedly contributed to the commercial failure of OBDs, but the question is which is the bigger reason. And the reason why it is particularly important here is that, at least as far as Intel's SSD strategy is concerned, its advantage is that (modulo implementation shortcomings such as the reported internal LBA remapping table fragmentation problem and the lack of ATA TRIM support) filesystems don't need to change (much) in order to take advantage of the Intel SSD and get at least decent performance. However, if the price delta is the stronger reason for OBDs' failure, then the X25-M may be in trouble. Currently the 80GB Intel X25-M has a street price of $400, so it costs roughly $5 per gigabyte. Dumb MLC SATA SSDs are available for roughly half the cost per gigabyte (64 GB for $164). So what does the market look like 12-18 months from now? If dumb SSDs are still available at 50% of the cost of smart SSDs, it would probably be worth it to make a copy-on-write style filesystem that attempts to do the wear leveling and write amplification reduction in software. Sure, it's probably more efficient to do it in hardware, but a 2x price differential might cause people to settle for a cheaper solution even if it isn't the absolute best technical choice. On the other hand, if prices drop significantly, and/or dumb SSDs completely disappear from the market, then time spent now optimizing for dumb SSDs will be completely wasted. So for Linus to make the proclamation that it's completely stupid to optimize for dumb SSDs seems to be a bit premature. Market externalities (for example, does Intel have patents that will prevent competing smart SSDs from entering the market and thus forcing price drops?) could radically change the picture. It's not just a pure technological choice, which is what makes projections and prognostications difficult. As another example, I don't know whether or not Intel will issue a firmware update that adds ATA TRIM support to the X25-M, or how long it will take before such SSDs become available. Until ATA TRIM support becomes available, it will be advantageous to add support in ext4 for a block allocator option that aggressively reuses blocks above all else, and avoids using blocks that have never been allocated or used before, even if it causes more in-file-system fragmentation and deeper extent allocation trees. The reason for this is that, at the moment, once a block has been used by the file system, the X25-M has absolutely no idea whether we still care about the contents of that block, or whether the block has since been released because the file was deleted. However, if 20% of the SSD's blocks have never been used, the X25-M can use that 20% of the flash for better garbage collection and defragmentation algorithms. And if Intel never releases a firmware update to add ATA TRIM support, then I will be out $400 of my own pocket for an SSD that lacks this capability, and so adding a block allocator which works around the limitations of the X25-M probably makes sense. If it turns out that it takes two years before disks that have ATA TRIM support show up, then it will definitely make sense to add such an optimization.
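The allocation policy being described boils down to: when picking a free block, prefer one that has been written at some point in the past, and fall back to a never-written block only when forced to, so that the never-written blocks stay available for the drive's internal garbage collection. A toy sketch of that preference (in Erlang, purely illustrative; pick_block, FreeBlocks and EverWritten are all made up, and this has nothing to do with the actual ext4 allocator):

    %% Toy sketch only; not ext4 code. FreeBlocks is a list of free block
    %% numbers, EverWritten is a sets set of block numbers that have ever
    %% been written. Prefer reusing an already-written block; fall back to
    %% a never-written block only when no previously-written block is free.
    pick_block(FreeBlocks, EverWritten) ->
        case [B || B <- FreeBlocks, sets:is_element(B, EverWritten)] of
            [Reusable | _] -> Reusable;
            []             -> hd(FreeBlocks)
        end.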
(Hard drive vendors have been historically S-L-O-W to finish standardizing new features and then letting such features enter the marketplace, so I'm not necessarily holding my breath; after all, the Linux block device layer and file systems have been ready to send ATA TRIM commands for about six months; what's taking the ATA committees and SSD vendors so long? <grin>) On the other hand, if Intel releases ATA TRIM support next month, then it might not be worth my effort to add such a mount option to ext4. Or maybe SanDisk will make an ATA TRIM-capable SSD available soon which is otherwise competitive with Intel's, and I'll get a free sample, but it will turn out that another optimization on SanDisk SSDs gives me an extra 10% performance gain under some workloads. Is it worth it in that case? Hard to tell, unless I know whether such a tweak addresses an optimization problem which is fundamental, and whether or not such a tweak will be either unnecessary, or perhaps actively unhelpful, in the next generation. As long as SSD manufacturers force us to treat these devices as black boxes, a certain amount of cargo cult science may be forced upon us file system designers; or, I guess I should say, in order to be more academically respectable, we will be forced to rely more on "empirical measurements leading to educated engineering estimations about what the SSD is doing inside the black box". Heh. Related posts (automatically generated):
  1. Aligning filesystems to an SSD's erase block size: I recently purchased a new toy, an Intel X25-M SSD,...

20 February 2009

Biella Coleman: FVL at NYU

As a die-hard anthropologist, I never thought I would cavort so much with legal types. But given the nature of my project, it is pretty much impossible to avoid. Thankfully, the legal crowd dealing with digital issues is pretty entertaining, interesting, and fun to listen to. And probably one of my favorite legal thinkers is coming to NYU to give a talk this coming Monday. If you are into this sort of thing and have not heard FVL speak, do make the time to join.

9 February 2009

Biella Coleman: Electric Sun

Welcome, Congress, to our generation's electric sun. Earlier, I had posted the Wikileaks link to these congressional reports with comments of my own, but I thought I would pen down a few thoughts as I finally electronically leafed through some of them. These reports remind me a little bit of another set of reports that are publicly available, the Congressional Quarterly reports, which are an excellent resource for research. They are a bit dry but provide a wealth of information and, perhaps more important, citations to law cases, journalistic articles, and academic pieces (everything, in other words, that journalistic pieces do not do). It does half the research for you, as I like to think. The few reports I have scanned from the leaks remind me, in fact, of the CQ reports in content, style and tone. And while I thought the CQ reports covered a wide range of topics, these semi-private reports are far more extensive in terms of topics and far more specific as well:
CRS: Humane Treatment of Farm Animals: Overview and Issues, December 10, 2008
CRS: FDA s Authority to Ensure That Drugs Prescribed to Children Are Safe and Effective, December 2, 2008
CRS: African American Members of the United States Congress: 1870-2008, December 4, 2008
CRS: The Pigford Case: USDA Settlement of a Discrimination Suit by Black Farmers, January 13, 2009
CRS: Selected Legal and Policy Issues Related to Coalbed Methane Development, March 9, 2004
I look forward to hearing/learning more about how they are used (I can't imagine that each report is read by many) and why exactly they were hidden away, as they don't seem to be the type of information that should be kept classified.

25 January 2009

John Goerzen: Review: The Economist

A few months ago, I asked for suggestions for magazines to subscribe to. I got a lot of helpful suggestions, and subscribed to three: The New Yorker, The Atlantic, and The Economist. Today, I'm reviewing the only one of the three that I'm disappointed in, and it's The Economist. This comes as something of a surprise, because so many people (with the exception of Bryan O'Sullivan) recommended it. Let's start with a quote from the issue that found its way to my mailbox this week:
A crowd of 2m or more is making its way to Washington, DC, to witness the inauguration of Mr Obama. Billions more will watch it on television. [link]
Every issue, I see this sort of thing all over. An estimate, or an opinion, presented as unquestioned fact, sometimes pretty clearly wrong or misleading. For weeks before Jan. 20, and even the day before, the widely-reported word from officials was that they had no idea what to expect, but if they had to guess, they'd say that attendance would be between 1 and 2 million. In the end, the best estimates have placed attendance at 1.8 million. Would it have killed them to state that most estimates were more conservative, and to cite the source of their particular estimate? That's all I want, really, when they do things like this. I knew going into it that the magazine (to American eyes) essentially editorializes throughout, and I don't have a problem with that. But it engages in over-generalization far too often, and that's just when I catch it. This was just a quick example from the first article I read in this issue; it's more blatant in other places, but quite honestly I'm too lazy to go look up more examples at this hour. I do remember, though, them referring to members of Obama's cabinet as if their appointments were certain, back before Obama had even announced his picks, let alone their confirmation hearings happening. One of my first issues of The Economist had a lengthy section on the global automobile market. I learned a lot about how western companies broke into markets in Asia and South America. Or at least I think I did. I don't know enough about that subject to catch them if they are over-generalizing again. The end result is that I read each issue with a mix of fascination and distrust; the topics are interesting, but I can never really tell if I'm being given an accurate story. It often feels like the inside scoop, but then when I have some bit of knowledge of what the scoop is, it's often a much murkier shade of gray than The Economist's ever-confident prose lets on. Don't get me wrong; there are things about The Economist I like. But not as much as with The New Yorker or The Atlantic, so I'll let my subscription lapse after 6 months but keep reading it until then.

23 January 2009

MJ Ray: I m not going to FOSDEM 2009

FOSDEM is a place where "free and open source software" really means open source software and the F in FOSDEM is a token gesture to the free software movement, if you believe FOSDEM organiser Philip Paeps. If that's so, I'm amazed they can claim to promote the awareness of free software with a straight face. I'm not going to FOSDEM again because there are better value-for-money events around now, but I do understand the motives of the various anti-Novell campaigners. The Boycott Novell site linked to my explanation of Boycotts a few years back. I really don't agree that "A conference just is not complete without corporate schwag and presence." Presence? I don't really enjoy being jumped by salesmen. Schwag? There was one long-term useful item in my last conference pack; that was from an association, and I would probably have bought a slightly better one if I hadn't been given it. So, I really don't understand why FOSDEM are annoying people in part because they like branded cheap junk. I suspect most of it ends up in the bins before long. I mean, who needs a non-obvious bottle-opener? I have one on the waiter's friend, another with a nice handle, one on my penknife, one on my wife's penknife and even one on the can opener's handle. There are probably a few more in my home, and bottles aren't even so hard to open without a special tool anyway! Anyway, if you're going, I hope you have a safe journey. Maybe you could mention Glyn Moody's summary of the FOSDEM controversy to some people there. Call me when you're organising a trip to something like a DrupalCon, WordCamp or LUGRadio Live.

2 December 2008

Jon Dowland: free kilowatts

I've just learnt that KiloWatts, one of my favourite artists to emerge from the "Soulseek" scene, has two of his EPs available for free for a limited time only. I already own Teknopera, which is a fantastic slice of furious breakbeat, but I haven't heard "Quickfire" yet. "Perfected Everything", KiloWatts' song originally available as part of Soulseek Records compilation 001, is one of my all-time favourite electronic/dance tracks. It's also a track from an album of the same name. If you like electronica, go and give these a try!

22 November 2008

Biella Coleman: In San Francisco for the AAAs



(Photo originally uploaded by the biella.)
I have been in the beautifully dramatic city of San Francisco for a few days now to catch up with old friends, meet new ones, and present at the AAA meetings, probably the single largest gathering of anthropologists on Planet Earth. I love seeing the friends, I love going to (some of) the panels, but I (totally) loathe trying to say something of substance in 15 minutes. A lot of my work is mired in esoterica (legally, technically, and culturally) and I am giving my short talk on protests against Scientology, the Lulz, etc. etc., and well, if you are in the know, I think I can pull it off in 15 minutes. If you are not in the know, it may strike you as completely odd, esoteric, opaque, but at least I can say I did it for the Lulz.

21 November 2008

David Moreno Garza: Book meme

"De esta triste estancia en la capital de Colombia se rescata el que pudieron ver jugar al mítico Real Madrid contra el Millonarios." Translation: "From this sad stay in the capital of Colombia, the one thing salvaged is that they got to see the mythical Real Madrid play Millonarios." From Ernesto Guevara también conocido como el Che (Ernesto Guevara, also known as Che), by Paco Ignacio Taibo II.
